1. Recap

In the last mission, we explored how to use a simple k-nearest neighbors machine learning model that used just one feature, or attribute, of the listing to predict the rent price. We first relied on the accommodates column, which describes the number of people a living space can comfortably accommodate. Then, we switched to the bathrooms column and observed an improvement in accuracy. While these were good features for getting familiar with the basics of machine learning, it's clear that using just a single feature to compare listings doesn't reflect the reality of the market. An apartment that can accommodate 4 guests in a popular part of Washington, D.C. will rent for much more than one that can accommodate 4 guests in a crime-ridden area.

There are 2 ways we can tweak the model to try to improve the accuracy (decrease the RMSE during validation):

  • increase the number of attributes the model uses to calculate similarity when ranking the closest neighbors
  • increase k, the number of nearby neighbors the model uses when computing the prediction

In this mission, we'll focus on increasing the number of attributes the model uses. When selecting more attributes to use in the model, we need to watch out for columns that don't work well with the distance equation. This includes columns containing:

  • non-numerical values (e.g. city or state)
    • Euclidean distance equation expects numerical values
  • missing values
    • distance equation expects a value for each observation and attribute
  • non-ordinal values (e.g. latitude or longitude)
    • ranking by Euclidean distance doesn't make sense if all attributes aren't ordinal

In the following code screen, we've read the dc_airbnb.csv dataset from the last mission into pandas and brought over the data cleaning changes we made. Let's first look at the first row's values to identify any columns containing non-numerical or non-ordinal values. In the next screen, we'll drop those columns and then look for missing values in each of the remaining columns.
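A quick way to do that kind of inspection is to look at the first row's values and the column data types; this is just a sketch, and the exercise itself uses the DataFrame.info() method:

print(dc_listings.iloc[0])   # values in the first row
print(dc_listings.dtypes)    # 'object' columns hold non-numerical values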


Exercise Start.

Description:

  1. Use the DataFrame.info() method to return the number of non-null values in each column.

In [8]:
import pandas as pd
import numpy as np

In [9]:
np.random.seed(1)

In [10]:
dc_listings = pd.read_csv('dc_airbnb.csv')
# Shuffle the rows so the later train/test split isn't affected by the file's ordering.
dc_listings = dc_listings.loc[np.random.permutation(len(dc_listings))]
# Strip the commas and dollar signs from the price column, then convert it to a float.
stripped_commas = dc_listings['price'].str.replace(',', '', regex=False)
stripped_dollars = stripped_commas.str.replace('$', '', regex=False)
dc_listings['price'] = stripped_dollars.astype('float')

In [11]:
dc_listings.info()


<class 'pandas.core.frame.DataFrame'>
Int64Index: 3723 entries, 574 to 1061
Data columns (total 19 columns):
host_response_rate      3289 non-null object
host_acceptance_rate    3109 non-null object
host_listings_count     3723 non-null int64
accommodates            3723 non-null int64
room_type               3723 non-null object
bedrooms                3702 non-null float64
bathrooms               3696 non-null float64
beds                    3712 non-null float64
price                   3723 non-null float64
cleaning_fee            2335 non-null object
security_deposit        1426 non-null object
minimum_nights          3723 non-null int64
maximum_nights          3723 non-null int64
number_of_reviews       3723 non-null int64
latitude                3723 non-null float64
longitude               3723 non-null float64
city                    3723 non-null object
zipcode                 3714 non-null object
state                   3723 non-null object
dtypes: float64(6), int64(5), object(8)
memory usage: 581.7+ KB

2. Removing features

The following columns contain non-numerical values:

  • room_type: e.g. Private room
  • city: e.g. Washington
  • state: e.g. DC

while these columns contain numerical but non-ordinal values:

  • latitude: e.g. 38.913458
  • longitude: e.g. -77.031
  • zipcode: e.g. 20009

Geographic values like these aren't ordinal: a smaller numerical value doesn't correspond to a "lesser" location in any meaningful way. For example, the zip code 20009 isn't smaller or larger than the zip code 75023; both are simply unique identifiers. Latitude and longitude pairs describe a point on a geographic coordinate system, and different distance equations (e.g. haversine) are used in those cases.
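For reference (we won't need it in this mission), the haversine formula computes the great-circle distance between two points given by latitude $\varphi$ and longitude $\lambda$, where $r$ is the Earth's radius:

$\displaystyle d = 2r \arcsin\left(\sqrt{\sin^2\left(\frac{\varphi_2 - \varphi_1}{2}\right) + \cos\varphi_1 \cos\varphi_2 \sin^2\left(\frac{\lambda_2 - \lambda_1}{2}\right)}\right)$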

While we could convert the host_response_rate and host_acceptance_rate columns to be numerical (right now they're object data types and contain the % sign), these columns describe the host and not the living space itself. Since a host could have many living spaces and we don't have enough information to uniquely group living spaces to the hosts themselves, let's avoid using any columns that don't directly describe the living space or the listing itself:

  • host_response_rate
  • host_acceptance_rate
  • host_listings_count

Let's remove these 9 columns from the Dataframe.


Exercise Start.

Description:

  1. Remove the 9 columns we discussed above from dc_listings:
    • 3 containing non-numerical values
    • 3 containing numerical but non-ordinal values
    • 3 describing the host instead of the living space itself
  2. Verify the number of null values in each of the remaining columns.

In [12]:
drop_columns = ['room_type', 'city', 'state', 'latitude', 'longitude', 'zipcode',
                'host_response_rate', 'host_acceptance_rate', 'host_listings_count']
dc_listings.drop(labels=drop_columns, axis=1, inplace=True)

In [13]:
dc_listings.info()


<class 'pandas.core.frame.DataFrame'>
Int64Index: 3723 entries, 574 to 1061
Data columns (total 10 columns):
accommodates         3723 non-null int64
bedrooms             3702 non-null float64
bathrooms            3696 non-null float64
beds                 3712 non-null float64
price                3723 non-null float64
cleaning_fee         2335 non-null object
security_deposit     1426 non-null object
minimum_nights       3723 non-null int64
maximum_nights       3723 non-null int64
number_of_reviews    3723 non-null int64
dtypes: float64(4), int64(4), object(2)
memory usage: 319.9+ KB

3. Handling missing values

Of the remaining columns, 3 columns have a few missing values (less than 1% of the total number of rows):

  • bedrooms
  • bathrooms
  • beds

Since the number of rows containing missing values in any of these 3 columns is low, we can select and remove those rows without losing much information. There are also 2 columns that have a large number of missing values:

  • cleaning_fee - 37.3% of the rows
  • security_deposit - 61.7% of the rows

and we can't handle these easily. We can't just remove the rows containing missing values for these 2 columns because we'd miss out on the majority of the observations in the dataset. Instead, let's remove these 2 columns entirely from consideration.
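Those percentages come straight from the non-null counts above; here's a small sketch (not part of the exercise) for recomputing them yourself:

# Fraction of missing values per column, e.g. ~0.373 for cleaning_fee and ~0.617 for security_deposit.
print(dc_listings.isnull().sum() / len(dc_listings))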


Exercise Start.

Description:

  1. Drop the cleaning_fee and security_deposit columns from dc_listings.
  2. Then, remove all rows that contain a missing value for the bedrooms, bathrooms, or beds column from dc_listings.
    • You can accomplish this by using the Dataframe method dropna() and setting the axis parameter to 0.
    • Since only the bedrooms, bathrooms and beds columns contain any missing values, rows containing missing values in these columns will be removed.
  3. Display the null value counts for the updated dc_listings Dataframe to confirm that there are no missing values left.

In [14]:
# Drop the 2 columns with a large share of missing values.
dc_listings.drop(labels=['cleaning_fee', 'security_deposit'], axis=1, inplace=True)
# Drop the rows missing a value in any of the 3 remaining sparse columns.
dc_listings.dropna(subset=['bedrooms', 'bathrooms', 'beds'], axis=0, how='any', inplace=True)

In [15]:
dc_listings.info()


<class 'pandas.core.frame.DataFrame'>
Int64Index: 3671 entries, 574 to 1061
Data columns (total 8 columns):
accommodates         3671 non-null int64
bedrooms             3671 non-null float64
bathrooms            3671 non-null float64
beds                 3671 non-null float64
price                3671 non-null float64
minimum_nights       3671 non-null int64
maximum_nights       3671 non-null int64
number_of_reviews    3671 non-null int64
dtypes: float64(4), int64(4)
memory usage: 258.1 KB

4. Normalize columns

Here's how the dc_listings Dataframe looks after all the changes we made:

accommodates  bedrooms  bathrooms  beds  price  minimum_nights  maximum_nights  number_of_reviews
           2       1.0        1.0   1.0  125.0               1               4                149
           2       1.0        1.5   1.0   85.0               1              30                 49
           1       1.0        0.5   1.0   50.0               1            1125                  1
           2       1.0        1.0   1.0  209.0               4             730                  2
          12       5.0        2.0   5.0  215.0               2            1825                 34

You may have noticed that while the accommodates, bedrooms, bathrooms, beds, and minimum_nights columns hover between 0 and 12 (at least in the first few rows), the values in the maximum_nights and number_of_reviews columns span much larger ranges. For example, the maximum_nights column has values as low as 4 and as high as 1825 in just the first few rows. If we use these 2 columns as part of a k-nearest neighbors model, they could end up having an outsized effect on the distance calculations simply because their values are so much larger.

For example, 2 living spaces could be identical across every other attribute but differ greatly on the maximum_nights column. If one listing had a maximum_nights value of 1825 and the other a value of 4, the way Euclidean distance is calculated would make these listings appear very far apart, purely because of the size of the values in that one column (see the calculation below). To prevent any single column from having too much of an impact on the distance, we can normalize all of the columns to have a mean of 0 and a standard deviation of 1.
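To see the scale problem in numbers: if those two listings matched exactly on every other attribute, the entire (un-normalized) Euclidean distance would come from the maximum_nights difference alone:

$\displaystyle d = \sqrt{0 + 0 + \ldots + (1825 - 4)^2} = 1821$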

Normalizing the values in each column to the standard normal distribution (mean of 0, standard deviation of 1) preserves the shape of each column's distribution while aligning the scales. To normalize the values in a column to the standard normal distribution, you need to:

  • from each value, subtract the mean of the column
  • divide each value by the standard deviation of the column

Here's the mathematical formula describing the transformation that needs to be applied for all values in a column:

$\displaystyle z = \frac{x - \mu}{\sigma}$

where x is a value in a specific column, $\mu$ is the mean of all the values in the column, and $\sigma$ is the standard deviation of all the values in the column. Here's what the corresponding code, using pandas, looks like:

# Subtract each value in the column by the mean.
first_transform = dc_listings['maximum_nights'] - dc_listings['maximum_nights'].mean()
# Divide each value in the column by the standard deviation.
normalized_col = first_transform / dc_listings['maximum_nights'].std()

To apply this transformation across all of the columns in a Dataframe, you can use the corresponding Dataframe methods mean() and std():

normalized_listings = (dc_listings - dc_listings.mean()) / (dc_listings.std())

These methods operate column-wise, so when you call mean() or std() the appropriate column mean and column standard deviation are used for each value in the Dataframe. Let's now normalize all of the feature columns in dc_listings.
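Before doing that in the exercise, here's a quick sanity check (a sketch that assumes dc_listings now contains only numerical columns): the column means of the transformed result should come out approximately 0 and the column standard deviations approximately 1.

check = (dc_listings - dc_listings.mean()) / dc_listings.std()
print(check.mean().round(6))   # all approximately 0
print(check.std().round(6))    # all approximately 1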


Exercise Start.

Description:

  1. Normalize all of the feature columns in dc_listings and assign the new Dataframe containing just the normalized feature columns to normalized_listings.
  2. Add the price column from dc_listings to normalized_listings.
  3. Display the first few rows in normalized_listings.

In [16]:
normalized_listings = (dc_listings - dc_listings.mean()) / (dc_listings.std())
normalized_listings['price'] = dc_listings['price']

In [17]:
normalized_listings.head()


Out[17]:
      accommodates  bedrooms  bathrooms      beds  price  minimum_nights  maximum_nights  number_of_reviews
574      -0.596544 -0.249467  -0.439151 -0.546858  125.0       -0.341375       -0.016604           4.579650
1593     -0.596544 -0.249467   0.412923 -0.546858   85.0       -0.341375       -0.016603           1.159275
3091     -1.095499 -0.249467  -1.291226 -0.546858   50.0       -0.341375       -0.016573          -0.482505
420      -0.596544 -0.249467  -0.439151 -0.546858  209.0        0.487635       -0.016584          -0.448301
808       4.393004  4.507903   1.264998  2.829956  215.0       -0.065038       -0.016553           0.646219

5. Euclidean distance for multivariate case

In the last mission, we trained 2 univariate k-nearest neighbors models. The first one used the accommodates attribute while the second one used the bathrooms attribute. Let's now train a model that uses both attributes when determining how similar 2 living spaces are. Let's refer to the Euclidean distance equation again to see what the distance calculation using 2 attributes would look like:

$\displaystyle d = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + \ldots + (q_n - p_n)^2}$

Since we're using 2 attributes, the distance calculation would look like:

$\displaystyle d = \sqrt{(accommodates_1 - accommodates_2)^2 + (bathrooms_1 - bathrooms_2)^2}$

To find the distance between 2 living spaces, we need to calculate the squared difference between both accommodates values, the squared difference between both bathrooms values, add them together, and then take the square root of the resulting sum. Here's what the Euclidean distance between the first 2 rows in normalized_listings looks like:
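Plugging in the normalized accommodates and bathrooms values from the head() output above (accommodates is identical for these two rows, so only the bathrooms difference contributes):

$\displaystyle d = \sqrt{(-0.596544 - (-0.596544))^2 + (-0.439151 - 0.412923)^2} \approx 0.852$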

So far, we've been calculating Euclidean distance by writing out the logic for the equation ourselves. We can instead use the distance.euclidean() function from scipy.spatial, which takes in 2 vectors as parameters and calculates the Euclidean distance between them. The euclidean() function expects:

  • both of the vectors to be represented using a list-like object (Python list, NumPy array, or pandas Series)
  • both of the vectors must be 1-dimensional and have the same number of elements

Here's a simple example:

from scipy.spatial import distance
first_listing = [-0.596544, -0.439151]
second_listing = [-0.596544, 0.412923]
dist = distance.euclidean(first_listing, second_listing)
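For these two vectors, dist comes out to roughly 0.852, matching the hand calculation above.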

Let's use the euclidean() function to calculate the Euclidean distance between 2 rows in our dataset to practice.


Exercise Start.

Description:

  1. Calculate the Euclidean distance using only the accommodates and bathrooms features between the first row and fifth row in normalized_listings using the distance.euclidean() function.
  2. Assign the distance value to first_fifth_distance and display using the print function.

In [18]:
from scipy.spatial import distance

features = ['accommodates', 'bathrooms']

# Select just the 2 feature columns for the first and fifth rows.
first = normalized_listings[features].iloc[0]
fifth = normalized_listings[features].iloc[4]

first_fifth_distance = distance.euclidean(first, fifth)
print(first_fifth_distance)


5.272543124668404

6. Introduction to scikit-learn

So far, we've been writing functions from scratch to train the k-nearest neighbors models. While this is helpful deliberate practice for understanding how the mechanics work, you can be more productive and iterate more quickly by using a library that handles most of the implementation. In this screen, we'll learn about the scikit-learn library, which is the most popular machine learning library in Python. Scikit-learn contains functions for all of the major machine learning algorithms, organized around a simple, unified workflow. Both of these properties allow data scientists to be incredibly productive when training and testing different models on a new dataset.

The scikit-learn workflow consists of 4 main steps:

  • instantiate the specific machine learning model you want to use
  • fit the model to the training data
  • use the model to make predictions
  • evaluate the accuracy of the predictions

We'll focus on the first 3 steps in this screen and the next screen. Each model in scikit-learn is implemented as a separate class and the first step is to identify the class we want to create an instance of. In our case, we want to use the KNeighborsRegressor class. Any model that helps us predict numerical values, like listing price in our case, is known as a regression model. The other main class of machine learning models is called classification, where we're trying to predict a label from a fixed set of labels (e.g. blood type or gender). The word regressor from the class name KNeighborsRegressor refers to the regression model class that we just discussed.

Scikit-learn uses a similar object-oriented style to Matplotlib and you need to instantiate an empty model first by calling the constructor:

from sklearn.neighbors import KNeighborsRegressor
knn = KNeighborsRegressor()

If you refer to the documentation, you'll notice that by default:

  • n_neighbors: the number of neighbors, is set to 5
  • algorithm: for computing nearest neighbors, is set to auto
  • p: set to 2, corresponding to Euclidean distance

Let's set the algorithm parameter to brute and leave the n_neighbors value as 5, which matches the implementation we wrote in the last mission. If we leave the algorithm parameter set to the default value of auto, scikit-learn will try to use tree-based optimizations to improve performance (which are outside of the scope of this mission):

knn = KNeighborsRegressor(algorithm='brute')
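The solution cells later in this mission also pass metric='euclidean' explicitly. Since scikit-learn's default metric is Minkowski with p set to 2, which is exactly Euclidean distance, the two constructor calls below behave identically:

knn = KNeighborsRegressor(algorithm='brute')                      # default Minkowski metric with p=2, i.e. Euclidean
knn = KNeighborsRegressor(algorithm='brute', metric='euclidean')  # same distance, spelled out explicitly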

7. Fitting a model and making predictions

Now, we can fit the model to the data using the fit method. For all models, the fit method takes in 2 required parameters:

  • matrix-like object, containing the feature columns we want to use from the training set.
  • list-like object, containing correct target values.

A matrix-like object means that the method is flexible about the input it accepts: either a Dataframe or a NumPy 2D array of values works. This means you can select the columns you want to use from the Dataframe and pass that as the first parameter to the fit method.

If you recall from earlier in the mission, all of the following are acceptable list-like objects:

  • NumPy array
  • Python list
  • pandas Series object (e.g. when selecting a column)

You can select the target column from the Dataframe and use that as the second parameter to the fit method:

# Split full dataset into train and test sets.
train_df = normalized_listings.iloc[0:2792]
test_df = normalized_listings.iloc[2792:]
# Matrix-like object, containing just the 2 columns of interest from training set.
train_features = train_df[['accommodates', 'bathrooms']]
# List-like object, containing just the target column, `price`.
train_target = train_df['price']
# Pass everything into the fit method.
knn.fit(train_features, train_target)

When the fit method is called, scikit-learn stores the training data we specified within the KNeighborsRegressor instance (knn). If you try passing data containing missing values or non-numerical values into the fit method, scikit-learn will return an error. Scikit-learn contains many such checks that help prevent us from making common mistakes.

Now that we've specified the training data we want the model to use, we can use the predict method to make predictions on the test set. The predict method has only one required parameter:

  • matrix-like object, containing the feature columns from the dataset we want to make predictions on

The number of feature columns you use during both training and testing needs to match, or scikit-learn will return an error:

predictions = knn.predict(test_df[['accommodates', 'bathrooms']])

The predict() method returns a NumPy array containing the predicted price values for the test set. You now have everything you need to practice the entire scikit-learn workflow.


Exercise Start.

Description:

  1. Create an instance of the KNeighborsRegressor class with the following parameters:
    • n_neighbors: 5
    • algorithm: brute
  2. Use the fit method to specify the data we want the k-nearest neighbor model to use. Use the following parameters:
    • training data, feature columns: just the accommodates and bathrooms columns, in that order, from train_df.
    • training data, target column: the price column from train_df.
  3. Call the predict method to make predictions on:
    • the accommodates and bathrooms columns from test_df
    • assign the resulting NumPy array of predicted price values to predictions.

In [19]:
from sklearn.neighbors import KNeighborsRegressor

train_df = normalized_listings.iloc[0:2792]
test_df = normalized_listings.iloc[2792:]

In [30]:
knn = KNeighborsRegressor(n_neighbors=5, algorithm='brute')

# `features` still holds ['accommodates', 'bathrooms'] from the earlier cell.
train_features = train_df[features]
train_target = train_df['price']

knn.fit(train_features, train_target)

predictions = knn.predict(test_df[features])
predictions[:5]


Out[30]:
array([  80.8,  279.4,   97. ,   80.8,   80.8])

8. Calculating MSE using scikit-learn

In the last mission, we calculated the MSE and RMSE values using pandas arithmetic operators to compare each predicted value with the actual value from the price column of our test set. Alternatively, we can use the sklearn.metrics.mean_squared_error() function. Once you become familiar with the different machine learning concepts, unifying your workflow around scikit-learn saves you a lot of time and helps you avoid mistakes.

The mean_squared_error() function takes in 2 inputs:

  • list-like object, representing the true values
  • list-like object, representing the predicted values using the model

For this function, we won't show any sample code; instead, we'll leave it to you to read the documentation and work out how to use the function to calculate the MSE and RMSE values for the predictions we just made.


Exercise Start.

Description:

  1. Use the mean_squared_error function to calculate the MSE value for the predictions we made in the previous screen.
  2. Assign the MSE value to two_features_mse.
  3. Calculate the RMSE value by taking the square root of the MSE value and assign to two_features_rmse.
  4. Display both of these error scores using the print function.

In [32]:
from sklearn.metrics import mean_squared_error

features_train_columns = ['accommodates', 'bathrooms']
knn = KNeighborsRegressor(n_neighbors=5, algorithm='brute', metric='euclidean')
knn.fit(train_df[features_train_columns], train_df['price'])
predictions = knn.predict(test_df[features_train_columns])

two_features_mse = mean_squared_error(y_true = test_df['price'], y_pred = predictions)
two_features_rmse = np.sqrt(two_features_mse)

print('MSE: %.2f' % two_features_mse)
print('RMSE: %.2f' % two_features_rmse)


MSE: 15184.43
RMSE: 123.23

9. Using more features

Here's a table comparing the MSE and RMSE values for the 2 univariate models from the last mission and the multivariate model we just trained:

feature(s)               MSE      RMSE
accommodates             18646.5  136.6
bathrooms                17333.4  131.7
accommodates, bathrooms  15660.4  125.1

As you can tell, the model we trained using both features ended up performing better (lower error score) than either of the univariate models from the last mission. Let's now train a model using the following 4 features:

  • accommodates
  • bedrooms
  • bathrooms
  • number_of_reviews

Scikit-learn makes it incredibly easy to swap the columns used during training and testing. We'll leave this to you as a challenge: train and test a k-nearest neighbors model using these columns instead, using the code you wrote in the last screen as a guide.


Exercise Start.

Description:

  1. Create a new instance of the KNeighborsRegressor class with the following parameters:
    • n_neighbors: 5
    • algorithm: brute
  2. Fit a model that uses the following columns from our training set (train_df):
    • accommodates
    • bedrooms
    • bathrooms
    • number_of_reviews
  3. Use the model to make predictions on the test set (test_df) using the same columns. Assign the NumPy array of predictions to four_predictions.
  4. Use the mean_squared_error() function to calculate the MSE value for these predictions by comparing four_predictions with the price column from test_df. Assign the computed MSE value to four_mse.
  5. Calculate the RMSE value and assign to four_rmse.
  6. Display four_mse and four_rmse using the print function.

In [40]:
features = ['accommodates', 'bedrooms', 'bathrooms', 'number_of_reviews']
k = 5
knn = KNeighborsRegressor(n_neighbors=k, algorithm='brute', metric='euclidean')

knn.fit(train_df[features], train_df['price'])
four_predictions = knn.predict(test_df[features])

four_mse = mean_squared_error(y_true=test_df['price'], y_pred=four_predictions)
four_rmse = np.sqrt(four_mse)

print('MSE: %.2f' % four_mse)
print('RMSE: %.2f' % four_rmse)


MSE: 14044.07
RMSE: 118.51

10. Using all features

So far so good! As we increased the number of features the model used, we observed lower MSE and RMSE values:

feature(s)                                             MSE      RMSE
accommodates                                           18646.5  136.6
bathrooms                                              17333.4  131.7
accommodates, bathrooms                                15660.4  125.1
accommodates, bathrooms, bedrooms, number_of_reviews   13320.2  115.4

Let's take this to the extreme and use all of the potential features. We should expect the error scores to decrease since so far adding more features has helped do so.


Exercise Start.

Description:

  1. Use all of the columns, except for the price column, to train a k-nearest neighbors model using the same parameters for the KNeighborsRegressor class as the ones from the last few screens.
  2. Use the model to make predictions on the test set and assign the resulting NumPy array of predictions to all_features_predictions.
  3. Calculate the MSE and RMSE values and assign to all_features_mse and all_features_rmse accordingly.
  4. Use the print function to display both error scores.

In [41]:
features = train_df.columns.tolist()
features.remove('price')
k = 5
knn = KNeighborsRegressor(n_neighbors=k, algorithm='brute', metric='euclidean')

knn.fit(train_df[features], train_df['price'])
all_features_predictions = knn.predict(test_df[features])

all_features_mse = mean_squared_error(y_true=test_df['price'], y_pred=all_features_predictions)
all_features_rmse = np.sqrt(all_features_mse)

print('MSE: %.2f' % all_features_mse)
print('RMSE: %.2f' % all_features_rmse)


MSE: 15392.63
RMSE: 124.07

11. Next steps

Interestingly enough, the RMSE value actually increased (from 118.51 with the 4 selected features to 124.07 with every feature) when we used all of the features available to us. This means that selecting the right features is important and that using more features doesn't automatically improve prediction accuracy. We should re-phrase the lever we mentioned earlier from:

  • increase the number of attributes the model uses to calculate similarity when ranking the closest neighbors

to:

  • select the relevant attributes the model uses to calculate similarity when ranking the closest neighbors

The process of selecting features to use in a model is known as feature selection.
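Here's a minimal sketch of what that process can look like with the objects we already have (train_df, test_df, and the scikit-learn classes used above): loop over a few candidate feature sets and compare their RMSE values.

from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
import numpy as np

candidate_feature_sets = [
    ['accommodates', 'bathrooms'],
    ['accommodates', 'bedrooms', 'bathrooms', 'number_of_reviews'],
    [col for col in train_df.columns if col != 'price'],
]
for feature_set in candidate_feature_sets:
    knn = KNeighborsRegressor(n_neighbors=5, algorithm='brute')
    knn.fit(train_df[feature_set], train_df['price'])
    predictions = knn.predict(test_df[feature_set])
    rmse = np.sqrt(mean_squared_error(test_df['price'], predictions))
    print(feature_set, round(rmse, 2))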

In this mission, we prepared the data to be able to use more features, trained a few models using multiple features, and evaluated the different performance tradeoffs. We explored how using more features doesn't always improve the accuracy of a k-nearest neighbors model. In the next mission, we'll explore another knob for tuning k-nearest neighbor models - the k value.